

Development of a Legal Document AI-Chatbot

Devaraj, Pranav Nataraj, P, Rakesh Teja V, Gangrade, Aaryav, R, Manoj Kumar

arXiv.org Artificial Intelligence

With the exponential growth of digital data and the increasing complexity of legal documentation, there is a pressing need for efficient and intelligent tools to streamline the handling of legal documents. Given the recent developments in the AI field, especially in chatbots, they present a compelling solution to this problem. An insight into the process of creating a Legal Documentation AI Chatbot with as many relevant features as possible within the given time frame is presented. The development of each component of the chatbot is presented in detail, and each component's workings and functionality are discussed, from the build of the Android app and the LangChain query processing code through to the integration of both via a Flask backend and REST API methods.
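The Flask-backend-plus-REST-API integration the abstract describes might look something like the following sketch: the Android app POSTs a question, and a LangChain pipeline produces the answer. The endpoint path and the `answer_query` helper are assumptions for illustration; the real query-processing chain is not shown in the abstract.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def answer_query(question: str) -> str:
    # Placeholder for the LangChain document-QA pipeline the paper
    # describes; a real implementation would invoke the chain here.
    return f"Received: {question}"

@app.route("/query", methods=["POST"])
def query():
    # The Android client sends JSON over REST; the backend replies
    # with the chatbot's answer.
    data = request.get_json(force=True)
    return jsonify({"answer": answer_query(data["question"])})
```

A mobile client would then only need to issue an HTTP POST with a JSON body, keeping the app itself free of any ML dependencies.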


Introduction to ML Deployment: Flask, Docker & Locust

#artificialintelligence

You've spent a lot of time on EDA, carefully crafted your features, tuned your model for days and finally have something that performs well on the test set. Now, my friend, we need to deploy the model. After all, any model that stays in the notebook has a value of zero, regardless of how good it is. It might feel overwhelming to learn this part of the data science workflow, especially if you don't have a lot of software engineering experience. Fear not, this post's main purpose is to get you started by introducing one of the most popular frameworks for deployment in Python -- Flask.
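The smallest useful version of what the post builds toward is a single Flask endpoint wrapping a model. `dummy_model` below is a stand-in for a trained estimator; the post's actual model, Docker image, and Locust load test are not reproduced here.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def dummy_model(features):
    # Placeholder: a real service would load a fitted model from disk
    # and call its predict method instead.
    return sum(features)

@app.route("/predict", methods=["POST"])
def predict():
    # Accept a JSON payload of features, return a JSON prediction.
    payload = request.get_json(force=True)
    return jsonify({"prediction": dummy_model(payload["features"])})
```

From here, containerizing with Docker and load-testing with Locust are layered on top without changing the endpoint itself.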


A Full End-to-End Deployment of a Machine Learning Algorithm into a Live Production Environment

#artificialintelligence

After the article was published I received feedback from readers who were interested in how to push production deployment further, to explore how a machine learning algorithm could be fully deployed into a live production environment so that it could be "consumed" in a platform-agnostic way, and that led to the idea for this article … The first step is to develop the machine learning algorithm that we want to deploy. In the real world this could involve many weeks or months of development time and lots of iteration across the steps of the data science pipeline, but for this example I will develop a basic ML algorithm, as the main purpose of this article is to find a way to deploy an algorithm for use by "consumers". At this point we can see that we have a machine learning algorithm trained to predict drug prescriptions and that cross validation (i.e. … We are going to deploy this model into a production environment, and although it is a simple example we would not want to have to retrain our model in the live environment every time a user wanted to predict a drug prescription, hence our next step is to preserve the state of our trained model using pickle ... Now whenever we want to use the trained model, we simply need to reload its state from the model.pkl … And there we have it, a list of each categorical feature with the unique values that appear in the data and the corresponding numerical values as transformed by the LabelEncoder().
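The preserve-and-reload step the snippet describes can be sketched as follows. The dictionary standing in for the trained prescription model and the plain-Python encoder mapping are illustrative assumptions; the article uses a fitted scikit-learn classifier and LabelEncoder.

```python
import pickle

# Stand-in for the trained drug-prescription model; in the article
# this is a fitted scikit-learn estimator.
model = {"weights": [0.1, 0.2]}

# Preserve the trained model's state so it never needs retraining
# in the live environment.
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

# Later, whenever the model is needed, reload its state from model.pkl.
with open("model.pkl", "rb") as f:
    restored = pickle.load(f)

# The article also lists each categorical feature's unique values and
# their numeric codes; LabelEncoder assigns codes by sorted order,
# equivalent to this mapping:
classes = sorted({"HIGH", "LOW", "NORMAL"})
mapping = {c: i for i, c in enumerate(classes)}
```

Recording the encoder mappings alongside the pickled model matters because incoming requests arrive with the original categorical strings, not the numeric codes the model was trained on.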


Deploying a Spotify Recommendation Model with Flask

#artificialintelligence

The real value of machine learning models lies in their usability. If the model is not properly deployed, used, and continuously updated through cycles of customer feedback, it is doomed to stay in a GitHub repository, never reaching its actual potential. In this article, we will learn how to deploy a Spotify Recommendation Model in Flask in a few simple steps. The application we will deploy is stored in a recommendation_app folder. In the root directory, we have the wsgi.py …
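A typical wsgi.py for the layout the snippet describes is a thin entry point that imports the Flask app out of the package. The module path `recommendation_app.app` is an assumption; the article does not show the package's internals.

```python
# wsgi.py (project root) -- entry point for a WSGI server such as
# gunicorn or uWSGI. Assumed layout:
#
#   recommendation_app/
#       app.py          <- defines the Flask `app` object
#   wsgi.py
#
# The import path below is hypothetical and must match your package.
from recommendation_app.app import app

if __name__ == "__main__":
    # Local development only; in production run e.g. `gunicorn wsgi:app`.
    app.run()
```

Keeping wsgi.py at the root lets the WSGI server reference the app as `wsgi:app` without knowing anything about the package structure.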


Model Deployment

#artificialintelligence

Image Classification is a pivotal pillar when it comes to the healthy functioning of Social Media. Classifying content on the basis of certain tags is required by various laws and regulations, and it becomes important in order to hide content from certain audiences. I regularly encounter a "Sensitive Content" warning on some of the images while scrolling through my Instagram feed. I am sure you must have too.


We'll Do It Live: Updating Machine Learning Models on Flask/uWSGI with No Downtime

#artificialintelligence

Flask is one of the most popular REST API frameworks used for hosting machine learning (ML) models. The choice is heavily influenced by a data science team's expertise in Python and the reusability of training assets built in Python. At WW, Flask is used extensively by the data science team to serve predictions from various ML models. However, there are a few considerations that need to be made before a Flask application is production-ready. If Flask code isn't modified to run asynchronously, it can only run one request per process at a time.
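One common way to update a model inside a running worker without downtime, in the spirit of the article's title, is to keep the current model behind a lock and swap the reference atomically. This is a minimal sketch of the pattern, not the article's actual mechanism; the loader and model objects are placeholders.

```python
import threading

_lock = threading.Lock()
_model = {"version": 1}  # stand-in for a loaded ML model

def predict(x):
    # Take a consistent reference under the lock, then release it
    # before doing the (possibly slow) inference work.
    with _lock:
        model = _model
    return (model["version"], x)  # stand-in for model.predict(x)

def reload_model(new_model):
    # Swap in the new model atomically; in-flight requests finish on
    # the old reference and new requests see the new one.
    global _model
    with _lock:
        _model = new_model
```

Because the swap is a single reference assignment under the lock, no request ever observes a half-loaded model, which is the core of a zero-downtime update.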


Creating REST API for TensorFlow models – Becoming Human: Artificial Intelligence Magazine

@machinelearnbot

A while ago I wrote about Machine Learning model deployment with TensorFlow Serving. The main advantage of that approach, in my opinion, is performance (thanks to gRPC and Protobufs) and direct use of classes generated from Protobufs instead of manual creation of JSON objects. The client calls a server as if they were parts of the same program. That makes the code easy to understand and maintain. Now we host our model somewhere (for instance here) and can talk to it over gRPC using our special client.
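For contrast with the gRPC client the snippet mentions, TensorFlow Serving also exposes a REST predict endpoint of the form `POST /v1/models/<name>:predict` with a JSON body of `{"instances": [...]}`. The sketch below builds such a request with the standard library; the host, port, and model name are assumptions for illustration.

```python
import json
import urllib.request

def build_predict_request(instances, model="my_model",
                          host="localhost", port=8501):
    # Assemble a TF Serving REST predict request; 8501 is the default
    # REST port, and `my_model` is a hypothetical model name.
    url = f"http://{host}:{port}/v1/models/{model}:predict"
    body = json.dumps({"instances": instances}).encode("utf-8")
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})

req = build_predict_request([[1.0, 2.0]])
# urllib.request.urlopen(req) against a running TF Serving instance
# would return a JSON body of the form {"predictions": [...]}.
```

The trade-off the post highlights still holds: the REST path is simpler to call from any language, while the gRPC path avoids hand-built JSON by using Protobuf-generated classes.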